The central theme at the 2024 World Economic Forum’s Annual Meeting was ‘Rebuilding Trust’.
Trust is essential to the acceptance and adoption of increasingly pervasive technologies, such as artificial intelligence (AI), in both business and wider society.
In recent years, rising cyberattacks, geopolitical conflict, growing dependence on digital systems and the integration of new, complex technologies have all put trust under strain.
In this Q&A, Ashish K. Gupta, Chief Growth Officer, Europe and Africa at HCLTech, discusses the potential of AI, how to ensure it is adopted inclusively and as a force for good, and why building trust is key to its widespread adoption.
As we look ahead to the rest of 2024 and beyond, what impact do you predict AI will have on industries and traditional working practices?
This year is all about AI, with a particular focus on generative AI (GenAI).
In the B2C context, AI, and even GenAI, has been around for some time; Apple’s Siri and the predictive text features in Google’s Gmail are examples. In the next 12-24 months, consumers will be exposed to a growing number of GenAI products, which will accelerate adoption.
On the B2B side, the AI industry is guilty of overhyping generative AI. Still, there are several areas where the short-term impact will be significant, such as software engineering and developer productivity, customer support, content generation and even sales and marketing, where teams can train large language models (LLMs) to write portions of a request for proposal (RFP).
The potential in other areas is huge, but those use cases are still at the proof-of-concept stage. The road to adoption is a lot longer in the B2B environment.
GenAI is going to be transformative, and the excitement around it is comparable to that of the dotcom boom. Back then, only the companies that looked beyond the hype and stayed the course were able to take advantage of the internet. The same will be true of GenAI, although on different timelines and in different contexts.
How can organizations ensure AI-driven innovation is adopted in an inclusive manner and as a force for good?
This is at the heart of the debate around AI: how do you ensure it is a force for good, rather than a disruptive force that wreaks havoc?
First and foremost, responsibility lies with the technology industry.
In the current environment, only a small percentage of total investment in AI development goes toward alignment, the work of ensuring that AI systems built on LLMs behave safely in different types of environments. Far more money goes toward training these models.
That balance is off. The technology industry, and the people producing these technologies, need to invest more in alignment and in making these models inherently safer and free of bias.
The second place responsibility lies is with the organizations and industries that adopt this technology. Whether a system is directed at internal or external users, organizations need to establish guardrails and think carefully about the outcomes of adoption. With AI models especially, that means ensuring they are trained on the right datasets: ones free of bias and inclusive of different demographics.
In addition, organizations could establish ethics committees to review the training and outcomes of AI models to determine what is acceptable and what is not.
Finally, regulators and governments have a critical role to play in the usage and adoption of this technology. They need to step up by setting clear guidelines for different industries to follow.
When it comes to these types of ‘pervasive’ technologies, why is trust so important? How can trust be built or rebuilt?
Trust is very difficult to build, takes a second to lose and takes even more time and effort to rebuild.
Like the fashion industry, technology has witnessed trend after trend, many of which have not landed or created the impact that was promised. Our job is to ensure that we keep the promises we make about delivering business impact with these emerging technologies. That is how trust is built, or rebuilt.
With technologies like AI, use cases carry higher risk when they handle people’s data or automatically make decisions about services such as insurance policies or mortgages. It’s important to take extreme care here and ensure that any decision is transparent and explainable. That, along with keeping humans in the loop, is essential to maintaining trust.
And finally, there needs to be a lot more experimentation before rolling out high-risk AI use cases.
Looking ahead, are you optimistic that AI will play a positive role in business and society?
Absolutely. Applied correctly, this is a technology that can amplify human effort and human outcomes. While much of the hype has centered on ChatGPT, companies like Google DeepMind have been using AI to identify different types of cancer and assist doctors with the prevention and treatment of disease. The use cases in healthcare are remarkable and offer a great example of this technology’s power to do good in society.
We can’t lose sight of the fact that this powerful technology places a huge responsibility on all of us. We need to take that responsibility seriously.